All right, so I wanted to take ten minutes, in the same spirit as the talks of Felix and Fariba, to present our chair and the current projects and opportunities we have. This talk is probably targeted more at master's and PhD students, and also at postdocs, why not, because my goal is to tell you what kind of opportunities we have here at FAU, specifically at our chair. To start, let me tell you very quickly what we work on. The name of our chair is the Chair for Dynamics, Control, Machine Learning and Numerics, and these four words are of course the core of our research. We work in the broad field of applied mathematics, and something I want to stress is that this is still a mathematical chair: it is not really a chair in data science. We are of course very interested in, let's say, applications of machine learning to concrete problems, but what we are more interested in, as Enrique said today in the opening, is high fidelity: convergence, proving something. So machine learning is actually something that we integrate into our pipeline, but the main goal is not to, you know, actually develop an algorithm
and just use it on some real-world data. To give you a better idea of some of the topics we work on, and that you might hear about if you talk with us, and then maybe this part of the room can tell me whether I am being complete, you can see some buzzwords here. Some are more related to PDEs, like hyperbolic PDEs, transport equations, particle models, and so on. Some are related to numerics, like reduced-order modeling. Then we have optimization, and in particular optimal control. Finally we have the machine learning part: we work on transformers, neural ODEs, federated learning, and so on. Now, this slide is actually not a very good one; what I should show you is something more like this, because all of these topics are deeply interconnected. And something we value very much at our chair, at least in my experience, and you can of course tell me I am wrong, is the synergy between people. The first word here is dynamics, which is obviously related to mathematics, but I would say this is also a very dynamical place.
Since I came here one year ago, I have had the opportunity to work with many different people at the chair, and it is always very interesting to see what everyone brings to a new problem: maybe I am more of an expert in some theoretical mathematics, while some people are more expert on the applied side, and so on. So if you want to come to our chair and just sit in your office and do your own research without talking to anyone, this is not the place. Anyway, this was a very quick introduction to our chair; now let me get into the projects. Our chair is involved in several projects, and I could
do a list of all of them and tell you what each of them is about, but it is already 5:30, and that would be a bit like teaching the implicit function theorem at 5:30: probably nobody would listen. So let me just focus on the first one, which is the CoDeFeL project, an ERC project. Why am I focusing on this one? Because we currently have two open postdoc positions in the context of the CoDeFeL project. You can scan the QR code here or go to the website; in any case, I will upload these slides to the website, so any website you see here you can get later. So, the CoDeFeL project: Professor Zuazua can tell me whether I am correct, but this is an ERC project which will run from 2024 to 2029, and the name CoDeFeL stands for Control for Deep and Federated Learning. It is by now a well-known fact that control can play a very important role in explaining machine learning models, so this project aims to study what we can say about deep learning using control-theoretic tools and, most importantly, also what we can say about federated learning. Deep learning, you all know what it is. Federated learning, maybe not all of you know what it is, but nowadays it is, I would say, a crucial learning paradigm, because the idea of federated learning is to train a machine learning model in a collaborative way: you train a model using my data, your data, and so on, like ChatGPT, for instance. ChatGPT also trains on the conversations that we have with it, but clearly some very
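The collaborative training idea described above can be illustrated with the classical federated averaging (FedAvg) scheme. What follows is a minimal, hypothetical NumPy sketch, not code from the CoDeFeL project: each client runs a few gradient steps on its own private data, and a server averages the resulting weights, so only model parameters are ever exchanged, never the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three clients, each holding private data generated
# from the same linear model y = X @ w_true. The data never leaves a client.
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

def local_update(w, X, y, lr=0.05, epochs=10):
    """One client refines the current global model on its own data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Federated averaging: only model weights travel between clients and server.
w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(w, X, y) for X, y in clients]
    w = np.mean(local_weights, axis=0)

print(np.round(w, 3))  # recovers something close to w_true = [2, -1]
```

Even in this toy version, the control-theoretic flavor is visible: the server steers a shared state (the global weights) through repeated rounds of local dynamics and aggregation.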
Presenter
Dr. Lorenzo Liverani
Accessible via
Open access
Duration
00:11:32 min
Recording date
2025-04-28
Uploaded on
2025-04-29 13:07:52
Language
en-US
• Alessandro Coclite. Politecnico di Bari
• Fariba Fahroo. Air Force Office of Scientific Research
• Giovanni Fantuzzi. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Borjan Geshkovski. Inria, Sorbonne Université
• Paola Goatin. Inria, Sophia-Antipolis
• Shi Jin. SJTU, Shanghai Jiao Tong University
• Alexander Keimer. Universität Rostock
• Felix J. Knutson. Air Force Office of Scientific Research
• Anne Koelewijn. FAU MoD, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Günter Leugering. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Lorenzo Liverani. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Camilla Nobili. University of Surrey
• Gianluca Orlando. Politecnico di Bari
• Michele Palladino. Università degli Studi dell’Aquila
• Gabriel Peyré. CNRS, ENS-PSL
• Alessio Porretta. Università di Roma Tor Vergata
• Francesco Regazzoni. Politecnico di Milano
• Domènec Ruiz-Balet. Université Paris Dauphine
• Daniel Tenbrinck. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Daniela Tonon. Università di Padova
• Juncheng Wei. Chinese University of Hong Kong
• Yaoyu Zhang. Shanghai Jiao Tong University
• Wei Zhu. Georgia Institute of Technology